
    Bio-Inspired Motion Strategies for a Bimanual Manipulation Task

    Steffen JF, Elbrechter C, Haschke R, Ritter H. Bio-Inspired Motion Strategies for a Bimanual Manipulation Task. In: International Conference on Humanoid Robots (Humanoids). 2010.

    Discriminating Liquids Using a Robotic Kitchen Assistant

    Elbrechter C, Maycock J, Haschke R, Ritter H. Discriminating Liquids Using a Robotic Kitchen Assistant. Presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany.
    A necessary skill when using liquids in the preparation of food is the ability to estimate viscosity, e.g. in order to control the pouring velocity or to determine the thickness of a sauce. We introduce a method that allows a robotic kitchen assistant to discriminate between different but visually similar liquids. Using a Kinect depth camera, surface changes, induced by a simple pushing motion, are recorded and used as input to nearest-neighbour and polynomial-regression classification models. Results reveal that even when the classifier is trained on a relatively small dataset it generalises well to unknown containers and liquid fill rates. Furthermore, the regression model allows us to determine the approximate viscosity of unknown liquids.
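    The classification step lends itself to a compact illustration. Below is a minimal sketch, assuming the recorded surface changes have already been reduced to fixed-length feature vectors; the feature values, liquid names, and viscosity numbers are illustrative, not taken from the paper. A nearest-neighbour vote discriminates liquids, and a polynomial fit maps a settling-speed feature to an approximate viscosity.

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Label a query by majority vote of its k nearest training vectors."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical features, e.g. mean surface-height change over three time
# windows after the pushing motion (real features come from Kinect depth data).
feats = np.array([[0.90, 0.50, 0.20],    # water: surface settles quickly
                  [0.60, 0.50, 0.40],    # oil
                  [0.30, 0.28, 0.25]])   # honey: surface barely moves
labels = ["water", "oil", "honey"]
print(knn_classify(feats, labels, np.array([0.85, 0.48, 0.22]), k=1))  # water

# Regression variant: fit settling speed against (assumed) log10 viscosity,
# then read off an approximate viscosity for an unknown liquid.
settle = feats[:, 0]
log_visc = np.array([0.0, 1.8, 3.9])     # illustrative values, log10(mPa*s)
coeffs = np.polyfit(settle, log_visc, deg=2)
print(10 ** np.polyval(coeffs, 0.7))     # rough viscosity estimate
```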

    Approaching Manual Intelligence

    Maycock J, Dornbusch D, Elbrechter C, Haschke R, Schack T, Ritter H. Approaching Manual Intelligence. KI - Künstliche Intelligenz. 2010;24(4):287-294.
    Grasping and manual interaction for robots have so far largely been approached with an emphasis on physics and control aspects. Given the richness of human manual interaction, we argue for the consideration of the wider field of "manual intelligence" as a perspective for manual action research that brings the cognitive nature of human manual skills to the foreground. We briefly sketch part of a research agenda along these lines, argue for the creation of a manual interaction database as an important cornerstone of such an agenda, and describe the manual interaction lab recently set up at CITEC to realize this goal and to connect the efforts of robotics and cognitive science researchers towards making progress on a more integrated understanding of manual intelligence.

    Towards Anthropomorphic Robotic Paper Manipulation

    Elbrechter C. Towards Anthropomorphic Robotic Paper Manipulation. Bielefeld: Universität Bielefeld; 2020.
    The dream of robotics researchers to one day build intelligent multi-purpose household robots that can aid humans in their everyday lives, with the inherent necessity that they are able to interact in general environments, demands that such robots have dramatically improved abilities for real-time perception, dynamic navigation, and closed-loop manipulation. While feed-forward robotic manipulation of rigid objects, ubiquitous in manufacturing plants, is well understood, a particularly interesting challenge for household robots is the ability to manipulate deformable objects such as laundry, packaging, food items or paper. Given that most objects in our homes are explicitly tuned to be grasped, used and manipulated by human hands, transitioning from traditional robot grippers to anthropomorphic robot hands seems like a necessity. Having narrowed our focus to anthropomorphic robot hands, we sought a suitable domain of exploration within the possible set of deformable objects. We chose paper manipulation, which poses many unsolved challenges along the conceptual axes of perception, modeling and robot control. On reflection, it was an excellent choice, as it forced us to consider the peculiar nature of this everyday material at a very deep level, taking into consideration properties such as material memory and elasticity. We followed a bottom-up approach, employing an extensible set of primitive and atomic interaction skills (basic action primitives) that could be hierarchically combined to realize increasingly sophisticated higher-level actions. Along this path, we conceptualized, implemented and thoroughly evaluated three iterations of complex robotic systems for the shifting, picking up and folding of a sheet of paper. With each iteration it was necessary to significantly increase the abilities of our system. While our developed systems employed an existing bi-manual anthropomorphic robot setup and low-level robot control interface, all visual-perception and modeling tools were implemented from the ground up using our own C++ computer-vision library, ICL. Pushing a piece of paper across a table to a friend is an ability we acquire from a very early age. While seemingly trivial, even this task, which was the first we tackled, throws up interesting hurdles in terms of end-state comfort considerations and the need for closed-loop controllers to robustly execute the movement. In our next scenario the paper could no longer be treated as a rigid object; in fact, its deformable nature was exploited to facilitate a complex picking-up procedure. Fiducial markers were added to the paper to aid visual tracking, and two distinct models were employed and evaluated: a mathematical one and a physics-based one. For our final, fully implemented system, the robot succeeded in folding a sheet of paper in half using a complex sequence of alternating and parallel hand movements. Achieving this remarkably difficult feat required further significant improvements to our visual detection setup, and a mechanism to model folds in the physics engine had to be implemented. Removing the prerequisite that the paper is covered with fiducial markers was an important hurdle that we overcame using a combination of 3D point cloud and 2D SURF feature registration.
    Finally, our bottom-up approach to robotic paper manipulation was conceptually extended by the generation of a set of hierarchically organized basic action primitives. The generalization of our approach was verified by applying it to other kinds of deformable, but non-paper, objects. We believe that a thorough understanding of strategies for dexterous robotic manipulation of paper-like objects and their replication in an anthropomorphic bi-manual robot setup provides a significant step towards a synthesis of the manual intelligence that we see at work when handling non-rigid objects with our own, human hands.
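    The marker-free tracking mentioned above rests on standard 2D feature registration. The sketch below shows only that 2D half, with OpenCV's ORB standing in for SURF (SURF is patented and absent from stock OpenCV builds); the function name and parameters are illustrative, and in the thesis the resulting correspondences are additionally fused with 3D point-cloud registration.

```python
import cv2
import numpy as np

def locate_sheet(template, frame):
    """Find a (roughly flat) textured sheet in a camera frame via 2D feature
    matching. ORB replaces the SURF features used in the thesis."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # keep the best matches by descriptor distance before RANSAC
    matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)[:100]
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps template coordinates into the frame; the 3D paper model
               # would be anchored by combining this with depth data
```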

    Folding Paper with Anthropomorphic Robot Hands using Real-Time Physics-Based Modeling

    Elbrechter C, Haschke R, Ritter H. Folding Paper with Anthropomorphic Robot Hands using Real-Time Physics-Based Modeling. In: 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012). Piscataway, NJ: IEEE; 2012.
    The ability to manipulate deformable objects, such as textiles or paper, is a major prerequisite to bringing the capabilities of articulated robot hands closer to the level of manual intelligence exhibited by humans. We concentrate on the manipulation of paper, which affords us a rich interaction domain that has not yet been solved for anthropomorphic robot hands. Robust tracking and physically plausible modeling of the paper as well as feedback-based robot control are crucial components for this task. This paper makes two novel contributions to this area. The first concerns real-time modeling and visual tracking. Our technique not only models the bending of a sheet of paper, but also paper crease lines, which allows us to monitor deformations. The second contribution concerns enabling an anthropomorphic robot to fold paper, and is accomplished by introducing a set of tactile- and vision-based closed-loop controllers.
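    A crease can be modelled surprisingly cheaply in a mass-spring approximation of a sheet. The following toy sketch is illustrative only (the paper uses a real-time physics engine with a dedicated fold mechanism): bending springs that span a chosen crease row are weakened, so that constraint relaxation lets the sheet fold preferentially along that line.

```python
import numpy as np

N = 8                                             # particles per side
grid = np.stack(np.meshgrid(np.arange(N), np.arange(N), indexing="ij"), -1)
pos = np.concatenate([grid.reshape(-1, 2).astype(float),
                      np.zeros((N * N, 1))], axis=1)   # flat sheet in 3D

def idx(i, j):
    return i * N + j

springs = []   # (particle a, particle b, rest length, stiffness)
for i in range(N):
    for j in range(N):
        if j + 1 < N:                             # structural, horizontal
            springs.append((idx(i, j), idx(i, j + 1), 1.0, 1.0))
        if i + 1 < N:                             # structural, vertical
            springs.append((idx(i, j), idx(i + 1, j), 1.0, 1.0))
        if i + 2 < N:                             # bending, spans two cells
            crease = i < N // 2 <= i + 2          # spring crosses crease row?
            springs.append((idx(i, j), idx(i + 2, j), 2.0,
                            0.05 if crease else 0.8))

def relax(pos, springs, iters=20):
    """Project particle positions toward each spring's rest length
    (position-based-dynamics style constraint relaxation)."""
    for _ in range(iters):
        for a, b, rest, k in springs:
            d = pos[b] - pos[a]
            n = np.linalg.norm(d)
            if n > 1e-9:
                corr = 0.5 * k * (n - rest) / n * d
                pos[a] += corr
                pos[b] -= corr
    return pos
```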

    3D Scene Segmentation for Autonomous Robot Grasping

    Ückermann A, Elbrechter C, Haschke R, Ritter H. 3D Scene Segmentation for Autonomous Robot Grasping. Presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), Algarve, Portugal.
    We present an algorithm to segment an unstructured table-top scene. Operating on the depth image of a Kinect camera, the algorithm robustly separates objects of previously unknown shape in cluttered scenes of stacked and partially occluded objects. The model-free algorithm finds smooth surface patches which are subsequently combined to form object hypotheses. We evaluate the algorithm regarding its robustness and real-time capabilities and discuss its advantages compared to existing approaches as well as its weak spots to be addressed in future work. We also report on an autonomous grasping experiment with the Shadow Robot Hand which employs the estimated shape and pose of segmented objects.
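    The core of the model-free step, finding smooth surface patches in the depth image, can be illustrated in a few lines. This is a toy sketch assuming a dense, noise-free, float-valued depth image; the published algorithm additionally smooths normals and merges patches into object hypotheses.

```python
import numpy as np
from scipy import ndimage

def smooth_patches(depth, normal_thresh=0.96):
    """Estimate per-pixel surface normals from a depth image, mark edges
    where neighbouring normals disagree, and label the remaining smooth
    regions as surface-patch hypotheses."""
    dzdy, dzdx = np.gradient(depth)               # axis 0 = rows (y)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # cosine similarity with the right and lower neighbour
    sim_x = (normals[:, :-1] * normals[:, 1:]).sum(axis=2)
    sim_y = (normals[:-1, :] * normals[1:, :]).sum(axis=2)
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, :-1] |= sim_x < normal_thresh
    edges[:-1, :] |= sim_y < normal_thresh
    labels, n = ndimage.label(~edges)             # connected smooth regions
    return labels, n
```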

    Integrating vision, haptics and proprioception into a feedback controller for in-hand manipulation of unknown objects

    Li Q, Elbrechter C, Haschke R, Ritter H. Integrating vision, haptics and proprioception into a feedback controller for in-hand manipulation of unknown objects. Presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013).

    Bi-Manual Robotic Paper Manipulation Based on Real-Time Marker Tracking and Physical Modelling

    Elbrechter C, Haschke R, Ritter H. Bi-Manual Robotic Paper Manipulation Based on Real-Time Marker Tracking and Physical Modelling. In: International Conference on Intelligent Robots and Systems (IROS 2011). Piscataway, NJ: IEEE; 2011.
    The ability to manipulate deformable objects, such as textiles or paper, is a major prerequisite to bringing the capabilities of articulated robot hands closer to the level of manual intelligence that is the basis of most of our manual skills. We concentrate on the manipulation of paper, which affords us a rich interaction domain that has not yet been solved for anthropomorphic robot hands. A key ability for this domain is the robust tracking and modelling of paper under conditions of occlusion and strong deformation. We present a marker-based framework that realizes these properties robustly and in real-time. We compare a purely mathematical representation of a 2D paper manifold in 3D space with a soft-body-physics paper model and demonstrate the use of our visual tracking method to facilitate the coordination of two anthropomorphic 24-DOF Shadow Dexterous Hands while they grasp a flat-lying piece of paper using a combination of visually guided bulging and pinching.
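    The marker-tracking idea can be sketched with OpenCV's ArUco module (opencv-contrib, version 4.7 or later) standing in for the ICL fiducial markers used in the paper; the marker IDs, grid mapping, and detector settings here are assumptions. Each detected marker ties a known rest position on the sheet to an observed image point, and these sparse correspondences anchor the deformable paper model.

```python
import cv2

# ArUco stands in for ICL's own fiducial markers (an assumption, not the
# authors' setup).
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def marker_anchors(frame, marker_grid):
    """marker_grid: dict mapping marker id -> (u, v) rest position on the
    sheet. Returns sheet coordinates -> observed image points."""
    corners, ids, _ = detector.detectMarkers(frame)
    anchors = {}
    if ids is not None:
        for c, i in zip(corners, ids.flatten()):
            if int(i) in marker_grid:
                # centre of the marker's four corners in the image
                anchors[marker_grid[int(i)]] = c[0].mean(axis=0)
    return anchors
```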

    Supplementary Material for "AudioDB: Get in Touch with Sounds"

    Bovermann T, Elbrechter C, Hermann T, Ritter H. Supplementary Material for "AudioDB: Get in Touch with Sounds". Bielefeld University; 2008.
    Digital audio in its various appearances is ubiquitous in our everyday life. Searching and sorting sounds collected in extensive databases, e.g. sampling libraries for musical production or seismographical surveys, is difficult and often bound to the tight restrictions of the standard human-computer interface of keyboard and mouse. The common technique of tagging sounds and other media files also has the drawback that it needs descriptive words, which is a difficulty not to be underestimated for sounds. We therefore created AudioDB, an intuitive human-computer interface for interactively exploring sounds by representing them as physical artifacts (grains) on a tabletop surface. The system is capable of sonic sorting, grouping and selecting of sounds represented as physical artifacts, and can therefore serve as a basis for discussions on audio-related tasks in working teams. AudioDB, however, is not a special solution for problems appearing in a dedicated field of work, but is designed as an easy-to-use multi-purpose tool for audio-based information. As a side effect, AudioDB can be used for grounding work on how humans handle digital information that is projected onto physical artifacts.

    Real-Time Hierarchical Scene Segmentation and Classification

    Ückermann A, Elbrechter C, Haschke R, Ritter H. Real-Time Hierarchical Scene Segmentation and Classification. In: 2014 IEEE-RAS International Conference on Humanoid Robots. Piscataway, NJ: IEEE; 2014: 225-231.
    We present an extension to our previously reported real-time scene segmentation approach which generates a complete hierarchy of segmentation hypotheses. An object classifier traverses the hypothesis tree in a top-down manner, returning good object hypotheses and thus helping to select the correct level of abstraction for segmentation and avoiding over- and under-segmentation. Combining model-free, bottom-up segmentation results with trained, top-down classification results, our approach improves both classification and segmentation results. It allows for the identification of object parts and complete objects (e.g. a mug composed of its handle and its inner and outer surfaces) in a uniform and scalable framework. We discuss its advantages compared to existing approaches and present qualitative results. Finally, the approach is applied in an interactive robotics scenario to help the robot grasp objects in response to verbal commands.
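    The top-down selection over the hypothesis tree is essentially a confidence-gated recursion. Below is a minimal sketch with an assumed node structure and classifier interface (not the authors' code): a hypothesis is accepted when the classifier is confident, otherwise its child segments are tried, which counters both under- and over-segmentation.

```python
def select_objects(node, classify, thresh=0.8):
    """node: object with .segment and .children (assumed structure);
    classify(segment) -> (label, confidence).
    Returns accepted (segment, label) pairs from the hypothesis tree."""
    label, conf = classify(node.segment)
    if conf >= thresh:
        return [(node.segment, label)]       # accept at this abstraction level
    results = []
    for child in node.children:              # descend into finer hypotheses
        results.extend(select_objects(child, classify, thresh))
    return results
```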